
    A Study of the Allan Variance for Constant-Mean Non-Stationary Processes

    The Allan Variance (AV) is a widely used quantity in areas focusing on error measurement, as well as in the general analysis of variance for autocorrelated processes, in domains such as engineering and, more specifically, metrology. It is widely used to detect noise patterns and indications of stability within signals. However, its properties are not known for commonly occurring processes whose covariance structure is non-stationary, and in these cases an erroneous interpretation of the AV could lead to misleading conclusions. This paper generalizes the theoretical form of the AV to some non-stationary processes while remaining valid for weakly stationary processes. Simulation examples show how this new form helps to identify the non-stationary processes that the AV is able to distinguish from stationary ones, hence allowing for a better interpretation of this quantity in applied cases.
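
    For reference, the (non-overlapping) Allan variance at averaging factor m is commonly defined as half the mean squared difference between successive block averages of length m. The Python sketch below computes this standard textbook definition; the function name and the white-noise example are illustrative and do not come from the paper.

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance of signal x at averaging factor m (in samples).

    A minimal sketch of the standard definition: split the signal into
    consecutive blocks of m samples, average each block, and take half the
    mean squared difference between successive block averages.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) // m
    block_means = x[:n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)

# Example: for white noise the Allan variance decays roughly like 1/m,
# one of the noise signatures the abstract alludes to.
rng = np.random.default_rng(0)
signal = rng.normal(size=10_000)
for m in (1, 10, 100):
    print(m, allan_variance(signal, m))
```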

    A simple recipe for making accurate parametric inference in finite sample

    Constructing tests or confidence regions that control the error rates in the long run is probably one of the most important problems in statistics. Yet, the theoretical justification for most methods in statistics is asymptotic. The bootstrap, for example, despite its simplicity and widespread usage, is an asymptotic method. In general, there is no claim about the exactness of inferential procedures in finite samples. In this paper, we propose an alternative to the parametric bootstrap. We set up general conditions to demonstrate theoretically that accurate inference can be claimed in finite samples.
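
    For context, the abstract contrasts the proposed finite-sample recipe with the standard parametric bootstrap, whose guarantees are asymptotic. The sketch below illustrates that baseline (a percentile confidence interval), not the paper's method; the helper names and the exponential-rate example are assumptions made for illustration.

```python
import numpy as np

def parametric_bootstrap_ci(data, fit, simulate, level=0.95, B=2000, rng=None):
    """Percentile confidence interval from the parametric bootstrap (the
    asymptotic baseline mentioned in the abstract, not the paper's recipe).

    fit      : data -> scalar parameter estimate
    simulate : (theta_hat, n, rng) -> synthetic sample of size n from the fitted model
    """
    rng = rng or np.random.default_rng()
    theta_hat = fit(data)
    reps = np.array([fit(simulate(theta_hat, len(data), rng)) for _ in range(B)])
    alpha = 1.0 - level
    return np.quantile(reps, [alpha / 2, 1.0 - alpha / 2])

# Illustrative example: exponential rate parameter, with MLE = 1 / sample mean.
rng = np.random.default_rng(1)
sample = rng.exponential(scale=1 / 3.0, size=30)  # true rate = 3
print(parametric_bootstrap_ci(
    sample,
    fit=lambda x: 1.0 / np.mean(x),
    simulate=lambda rate, n, r: r.exponential(scale=1.0 / rate, size=n),
))
```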

    On the Properties of Simulation-based Estimators in High Dimensions

    Considering the increasing size of available data, the need for statistical methods that control the finite sample bias is growing. This is mainly due to the frequent settings where the number of variables is large and allowed to increase with the sample size, causing standard inferential procedures to incur a significant loss of performance. Moreover, the complexity of statistical models is also increasing, thereby entailing important computational challenges in constructing new estimators or in implementing classical ones. A trade-off between numerical complexity and statistical properties is often accepted. However, numerically efficient estimators that are altogether unbiased, consistent and asymptotically normal in high dimensional problems would generally be ideal. In this paper, we set out a general framework from which such estimators can easily be derived for wide classes of models. This framework is based on the concepts that underlie simulation-based estimation methods such as indirect inference. The approach allows various extensions compared to previous results, as it accommodates possibly inconsistent initial estimators and is applicable to discrete models and/or models with a large number of parameters. We consider an algorithm, namely the Iterative Bootstrap (IB), to efficiently compute simulation-based estimators, and we establish its convergence properties. Within this framework we also prove the properties of simulation-based estimators, more specifically their unbiasedness, consistency and asymptotic normality when the number of parameters is allowed to increase with the sample size. An important implication of the proposed approach is therefore that it allows one to obtain unbiased estimators in finite samples. Finally, we study this approach when applied to three common models, namely logistic regression, negative binomial regression and lasso regression.
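
    The Iterative Bootstrap mentioned in the abstract corrects an initial, possibly biased, estimator by repeatedly comparing it with its average over samples simulated from the current parameter value. The sketch below is a minimal generic version of that iteration, assuming the usual update theta_(k+1) = theta_(k) + pi_hat(observed) - mean over H simulations of pi_hat(simulated under theta_(k)); the function names and the variance example are illustrative, not the paper's implementation.

```python
import numpy as np

def iterative_bootstrap(pi_hat, simulate, theta0, H=50, n_iter=25, rng=None):
    """Generic Iterative Bootstrap (IB) sketch.

    pi_hat   : initial (possibly biased) estimator, data -> parameter vector
    simulate : model simulator, (theta, rng) -> one synthetic data set
    theta0   : pi_hat evaluated on the observed data; it is both the starting
               value and the target that the simulated estimates are matched to
    """
    rng = rng or np.random.default_rng()
    pi_obs = np.asarray(theta0, dtype=float)
    theta = pi_obs.copy()
    for _ in range(n_iter):
        pi_sim = np.mean([pi_hat(simulate(theta, rng)) for _ in range(H)], axis=0)
        theta = theta + (pi_obs - pi_sim)   # shift by the estimated bias
    return theta

# Illustrative example: correcting the downward bias of the naive variance estimator.
rng = np.random.default_rng(2)
data = rng.normal(0.0, 2.0, size=20)
pi_hat = lambda x: np.array([np.mean(x), np.var(x)])  # np.var divides by n, hence biased
simulate = lambda th, r: r.normal(th[0], np.sqrt(max(th[1], 1e-8)), size=20)
print(iterative_bootstrap(pi_hat, simulate, pi_hat(data), rng=rng))
```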

    A penalized two-pass regression to predict stock returns with time-varying risk premia

    We develop a penalized two-pass regression with time-varying factor loadings. The penalization in the first pass enforces sparsity for the time-variation drivers while also maintaining compatibility with the no-arbitrage restrictions by regularizing appropriate groups of coefficients. The second pass delivers risk premia estimates used to predict equity excess returns. Our Monte Carlo results and our empirical results on a large cross-sectional data set of US individual stocks show that penalization without grouping can lead to nearly all estimated time-varying models violating the no-arbitrage restrictions. Moreover, our results demonstrate that the proposed method reduces the prediction errors compared to a penalized approach without appropriate grouping or to a time-invariant factor model.
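
    For context, the sketch below shows the classical, unpenalized two-pass regression that the paper extends: a first-pass time-series regression of asset excess returns on factors to obtain loadings, followed by a second-pass cross-sectional regression of average returns on those loadings to obtain risk premia. The group penalization, time-varying loadings and no-arbitrage restrictions of the proposed method are not reproduced here; all names and the simulated example are illustrative.

```python
import numpy as np

def two_pass_risk_premia(R, F):
    """Classical (unpenalized, time-invariant) two-pass regression sketch.

    R : (T, N) matrix of asset excess returns
    F : (T, K) matrix of factor realisations

    First pass : time-series regression of each asset on the factors -> loadings.
    Second pass: cross-sectional regression of average excess returns on the
                 loadings -> risk premia estimates.
    """
    T, _ = R.shape
    X = np.column_stack([np.ones(T), F])                          # intercept + factors
    betas = np.linalg.lstsq(X, R, rcond=None)[0][1:].T            # (N, K) loadings
    avg_returns = R.mean(axis=0)                                  # (N,)
    premia = np.linalg.lstsq(betas, avg_returns, rcond=None)[0]   # (K,) risk premia
    return betas, premia

# Illustrative example: factors with mean 0.02, so estimated premia should be near 0.02.
rng = np.random.default_rng(3)
T, N, K = 240, 50, 3
F = 0.02 + rng.normal(scale=0.1, size=(T, K))
true_betas = rng.normal(size=(N, K))
R = F @ true_betas.T + 0.05 * rng.normal(size=(T, N))
_, premia = two_pass_risk_premia(R, F)
print(premia)
```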